Neural coding basics

Author

  • Chaitanya Ekanadham

Abstract
1 Neural coding basics

1.1 Definitions

1. Response function: $\rho(t) = \sum_j \delta(t - t_j)$, where the $t_j$ are the spike times.

2. Firing rate: $r(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \int_t^{t+\Delta t} \rho(s)\, ds$.

3. Spike count: $n = \int_0^T \rho(t)\, dt$.

4. Average firing rate: $\tilde{r} = \frac{1}{T} \int_0^T \rho(t)\, dt = \frac{n}{T}$.

5. Tuning curve: $f(s)$ = average firing rate given a parameterized stimulus $s$ (typically plotted as a histogram and then fit to some functional form).

6. Stimulus/response correlation: $Q_{rs}(\tau) = \frac{1}{T} \int_0^T r(t)\, s(t+\tau)\, dt$.

7. Spike-triggered average: $C(\tau) = \frac{1}{n} \int_0^T \rho(t)\, s(t-\tau)\, dt \approx \frac{1}{n} \int_0^T r(t)\, s(t-\tau)\, dt$ (a discretized estimate is sketched after this list).

8. Stimulus autocorrelation: $Q_{ss}(\tau) = \frac{1}{T} \int_0^T s(t)\, s(t+\tau)\, dt$.

9. Response autocovariance/autocorrelation: $Q_{\rho\rho}(\tau) = \frac{1}{T} \int_0^T \bigl(\rho(t) - \tilde{r}\bigr)\bigl(\rho(t+\tau) - \tilde{r}\bigr)\, dt$.

10. Rate code: a code in which all stimulus information is conveyed via the firing-rate function $r(t)$, which does not vary on faster timescales than the stimulus $s(t)$. (Note: a rate code can still be very spike-time-dependent if the stimulus varies quickly, since a fast-varying stimulus induces a fast-varying rate; the key point is that if the stimulus varies slowly, then the rate function should not be so time-sensitive.)

11. Pulse/temporal code: a code in which information is carried by variations of the rate function on timescales shorter than the fastest stimulus variation.
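The following is a minimal numerical sketch, not part of the original notes, of how several of the quantities above could be estimated from a discretized single-trial recording. The bin width dt, the duration T, the toy white-noise stimulus, and the random spike times are all illustrative assumptions; only the formulas from the definitions above are taken from the text.

```python
# Sketch: discretized estimates of spike count, average rate, spike-triggered
# average, and stimulus autocorrelation (assumed toy data, single trial).
import numpy as np

rng = np.random.default_rng(0)

T = 10.0                 # trial duration in seconds (assumed)
dt = 0.001               # bin width in seconds (assumed)
t = np.arange(0.0, T, dt)
stimulus = rng.standard_normal(t.size)           # toy white-noise stimulus s(t)
spike_times = np.sort(rng.uniform(0.0, T, 200))  # toy spike times t_j

# Response function rho(t) as a binned spike train (height 1/dt per spike,
# so that integrating rho * dt recovers the spike count)
rho = np.zeros(t.size)
bins = np.minimum((spike_times / dt).astype(int), t.size - 1)
np.add.at(rho, bins, 1.0 / dt)

# Spike count n and average firing rate r_tilde = n / T
n = rho.sum() * dt
r_tilde = n / T

# Spike-triggered average C(tau) = (1/n) * integral of rho(t) s(t - tau) dt,
# estimated at lags 0 .. tau_max
tau_max = 0.1
lags = np.arange(0, int(tau_max / dt))
C = np.array([(rho[k:] * stimulus[:t.size - k]).sum() * dt / n for k in lags])

# Stimulus autocorrelation Q_ss(tau) = (1/T) * integral of s(t) s(t + tau) dt
Q_ss = np.array([(stimulus[:t.size - k] * stimulus[k:]).sum() * dt / T
                 for k in lags])

print(f"n = {n:.0f} spikes, r_tilde = {r_tilde:.1f} Hz, C(0) = {C[0]:.3f}")
```

For the white-noise stimulus assumed here, C should hover around zero because the toy spikes are independent of the stimulus; with real data, C(tau) estimates the average stimulus preceding a spike by lag tau.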


